Ignorance is Bliss: Robust Control via Information Gating
Informational parsimony provides a useful inductive bias for learning representations that achieve better generalization by being robust to noise and spurious correlations. We propose information gating as a way to learn parsimonious representations that identify the minimal information required for a task. When gating information, we can learn to reveal as little information as possible so that a task remains solvable, or hide as little information as possible so that a task becomes unsolvable. We gate information using a differentiable parameterization of the signal-to-noise ratio, which can be applied to arbitrary values in a network, e.g., erasing pixels at the input layer or activations in some intermediate layer. When gating at the input layer, our models learn which visual cues matter for a given task.
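The gating idea can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the sigmoid gate, the noise model, and the parsimony penalty below are assumptions chosen to show how a per-element gate can act as a learnable signal-to-noise ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

def gate(x, logits, rng):
    """Blend each value with Gaussian noise according to a gate.

    g = sigmoid(logits) in (0, 1) plays the role of a per-element
    signal-to-noise ratio: g = 1 passes the value through unchanged,
    g = 0 replaces it entirely with noise.
    """
    g = 1.0 / (1.0 + np.exp(-logits))
    noise = rng.normal(size=x.shape)
    return g * x + (1.0 - g) * noise, g

x = np.array([1.0, -2.0, 0.5])
open_logits = np.full(3, 10.0)    # gates near 1: reveal everything
shut_logits = np.full(3, -10.0)   # gates near 0: pure noise

revealed, g_open = gate(x, open_logits, rng)
hidden, g_shut = gate(x, shut_logits, rng)

# A parsimony penalty would sum the gate openings, pressuring the
# model to reveal as little as possible while the task stays solvable.
penalty = g_open.sum()
```

In training, the gate logits would be optimized jointly with the task loss, trading task performance against the penalty; a hiding objective would instead minimize gate closure subject to the task becoming unsolvable.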
Arbitrage-Free Bond and Yield Curve Forecasting with Neural Filters under HJM Constraints
We develop an arbitrage-free deep learning framework for yield curve and bond price forecasting based on the Heath-Jarrow-Morton (HJM) term-structure model and a dynamic Nelson-Siegel parameterization of forward rates. Our approach embeds a no-arbitrage drift restriction into a neural state-space architecture by combining Kalman, extended Kalman, and particle filters with recurrent neural networks (LSTM/CLSTM), and introduces an explicit arbitrage error regularization (AER) term during training. The model is applied to U.S. Treasury and corporate bond data, and its performance is evaluated for both yield-space and price-space predictions at 1-day and 5-day horizons. Empirically, arbitrage regularization yields its strongest improvements at short maturities, particularly in 5-day-ahead forecasts, increasing market consistency as measured by bid-ask hit rates and reducing dollar-denominated prediction errors.
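For reference, the static Nelson-Siegel curve underlying the dynamic parameterization can be sketched as follows. The parameter values are illustrative only, and the HJM drift restriction, the filtering machinery, and the AER term are all omitted; this shows just the forward-rate shape and the implied zero-coupon yield.

```python
import numpy as np

def ns_forward(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel instantaneous forward rate:
    f(tau) = b0 + b1 * exp(-lam*tau) + b2 * lam*tau * exp(-lam*tau)."""
    e = np.exp(-lam * tau)
    return beta0 + beta1 * e + beta2 * lam * tau * e

def ns_yield(tau, beta0, beta1, beta2, lam):
    """Zero-coupon yield implied by the NS forward curve
    (the average of f over [0, tau])."""
    x = lam * tau
    loading = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * loading + beta2 * (loading - np.exp(-x))

taus = np.array([0.25, 1.0, 5.0, 10.0])         # maturities in years
params = dict(beta0=0.04, beta1=-0.02, beta2=0.01, lam=0.6)  # hypothetical

f = ns_forward(taus, **params)
y = ns_yield(taus, **params)
```

The beta coefficients control level, slope, and curvature; in a dynamic setting they become latent states tracked by the Kalman-family filters, with the HJM restriction pinning down the risk-neutral drift.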
A Basic Evaluation of Neural Networks Trained with the Error Diffusion Learning Algorithm
This paper presents a comprehensive formulation of Kaneko's Error Diffusion Learning Algorithm (EDLA), a biologically inspired alternative to conventional backpropagation, and evaluates its effectiveness on parity check, regression, and image classification tasks. EDLA employs a single global error signal that diffuses across networks composed of paired positive and negative sublayers, eliminating traditional layer-wise error backpropagation. We systematically vary the neuron count, network depth, and learning rate to assess performance comprehensively. The experimental results demonstrate that EDLA achieves consistently high accuracy across these benchmarks, with the choice of learning rate, neuron count, and network depth significantly influencing its efficiency and convergence speed. Analysis of internal network representations reveals meaningful feature extraction capabilities, and the network's overall performance is competitive with networks trained via conventional backpropagation, especially in shallow architectures. EDLA has previously been underrecognized due to language barriers; by reformulating it, systematically evaluating its performance, and presenting empirical evidence of its effectiveness, this study increases its visibility and accessibility and contributes to biologically inspired training methodologies.
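The flavor of a single broadcast error signal with paired positive/negative weights can be illustrated on a toy linear regression. This is a schematic of the idea only, not Kaneko's actual update rule: the clipped opposite-signed updates and all hyperparameters below are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: targets from a hidden linear map.
X = rng.normal(size=(200, 4))
w_true = np.array([1.5, -0.5, 0.0, 2.0])
t = X @ w_true

# Paired non-negative sublayers: the effective weight is w_pos - w_neg.
w_pos = np.abs(rng.normal(size=4) * 0.1)  # "positive" sublayer weights
w_neg = np.abs(rng.normal(size=4) * 0.1)  # "negative" sublayer weights
eta = 0.01

def mse():
    return np.mean((X @ (w_pos - w_neg) - t) ** 2)

loss_before = mse()
for _ in range(200):
    for x, target in zip(X, t):
        y = x @ (w_pos - w_neg)
        e = target - y   # ONE scalar error, broadcast to every weight
        # Opposite-signed updates for the paired sublayers, kept >= 0.
        w_pos = np.maximum(w_pos + eta * e * x, 0.0)
        w_neg = np.maximum(w_neg - eta * e * x, 0.0)
loss_after = mse()
```

No per-layer error vectors are ever computed: every weight sees only the same scalar error and its own local input, which is the property that makes this family of rules biologically plausible.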
Table A: Additional Experiments (real data)
We thank all the reviewers for their valuable suggestions and feedback. Table 2 contains the L2 regularization terms of the proposed neural network (Section 3.3). We will elaborate on this point with concrete examples in the revised submission. The description of the regularization terms is given in lines 162-168.
Functional Encryption in Secure Neural Network Training: Data Leakage and Practical Mitigations
Alexandru Ioniţă, Andreea Ioniţă
With the increased interest in artificial intelligence, Machine Learning as a Service provides the infrastructure in the Cloud for easy training, testing, and deploying of models. However, these systems have a major privacy issue: uploading sensitive data to the Cloud, especially during training. Therefore, achieving secure Neural Network training has been on many researchers' minds lately. More and more solutions for this problem are built around a main pillar: Functional Encryption (FE). Although these approaches are very interesting and offer a new perspective on ML training over encrypted data, some vulnerabilities do not seem to be taken into consideration. In our paper, we present an attack on neural networks that use FE for secure training over encrypted data. Our approach uses linear programming to reconstruct the original input, undermining the previous security promises. To address the attack, we propose two solutions for secure training and inference that involve the client during the computation phase. One approach ensures security without relying on encryption, while the other uses function-hiding inner-product techniques.
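The core of such a reconstruction attack is simple to sketch. Assume (as an illustration, not the paper's exact setting) that the FE scheme decrypts the plaintext inner products z_i = <w_i, x> needed to evaluate a first linear layer whose weights W are known to the server; recovering x then reduces to solving a linear system. The paper casts this as a linear program, but a least-squares solve shows the same recovery when enough independent inner products are available.

```python
import numpy as np

rng = np.random.default_rng(42)

# A flattened 8-"pixel" input the client encrypts.
x_secret = rng.uniform(0.0, 1.0, size=8)

# First-layer weights, visible to the server in this illustrative setting.
W = rng.normal(size=(8, 8))

# What the inner-product FE scheme reveals in plaintext: the values
# z_i = <w_i, x> that the server needs to run the first layer.
z = W @ x_secret

# Attacker: with n independent inner products of an n-dimensional input,
# the "encrypted" input is recovered exactly by a linear solve
# (a simplified stand-in for the paper's linear-programming attack).
x_recovered, *_ = np.linalg.lstsq(W, z, rcond=None)
```

This is why the proposed mitigations either keep the client in the loop during computation or use function-hiding inner-product encryption, so that the server never learns both the weights and the plaintext inner products.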